The ever-increasing number of materials science articles makes it difficult to infer chemical structure-property relationships from the published literature. We use natural language processing (NLP) methods to automatically extract material property data from the abstracts of polymer literature. As a component of our pipeline, we trained a materials-domain language model on 2.4 million materials science abstracts; when used as a text encoder, it outperforms other baseline models on three out of five named entity recognition datasets. Using this pipeline, we obtained ~300,000 material property records from ~130,000 abstracts in 60 hours. The extracted data was analyzed for a diverse range of applications, such as fuel cells, supercapacitors, and polymer solar cells, to recover non-trivial insights. The data extracted through our pipeline is made available through a web platform at https://polymerscholar.org, which conveniently locates material property data recorded in abstracts. This work demonstrates the feasibility of an automatic pipeline that starts from the published literature and ends with a complete set of extracted material property information.
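The extraction step hinges on using the pretrained language model as a text encoder for named entity recognition over abstracts. A minimal sketch of that component, assuming a Hugging Face-compatible token-classification checkpoint (the model name below is a hypothetical placeholder, not the authors' release):

```python
# Minimal NER sketch: tag material/property entities in an abstract with a
# pretrained encoder fine-tuned for token classification.
# "some-org/materials-ner" is a hypothetical checkpoint, not the authors' model.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("some-org/materials-ner")
model = AutoModelForTokenClassification.from_pretrained("some-org/materials-ner")

ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")  # merge word pieces into entity spans

abstract = "The PVDF thin film showed a dielectric constant of 10.4 at 1 kHz."
for entity in ner(abstract):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```

Downstream, pairing the recognized material and property spans yields the material property records served by the web platform.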
In recent years, the use of deep learning (DL) algorithms has improved the performance of vision-based space applications. However, generating large amounts of annotated data to train these DL algorithms has proven challenging. While synthetically generated images can be used, DL models trained on synthetic data often suffer performance degradation when tested in real-world settings. In this context, the Interdisciplinary Centre for Security, Reliability and Trust (SnT) at the University of Luxembourg has developed the 'SnT Zero-G Lab' for training and validating vision-based space algorithms under conditions emulating real-world space environments. An important aspect of developing the SnT Zero-G Lab was equipment selection. Based on the lessons learned during the lab's development, this paper presents a systematic approach combining market surveys with experimental analyses for equipment selection. In particular, the paper focuses on the image acquisition equipment of a space lab: background materials, cameras, and lighting. The results of the experimental analyses show that effective equipment selection in space lab development projects requires market surveys to be complemented by experimental analysis.
Self-supervised learning (SSL) based models have been shown to generate powerful representations that can improve the performance of downstream speech tasks. Several state-of-the-art SSL models are available, and each optimizes a different loss, which raises the possibility that their features are complementary. This paper proposes using an ensemble of such SSL representations and models, exploiting the complementary nature of the features extracted by the various pretrained models. We hypothesize that this yields a richer feature representation and show results on the downstream ASR task. To this end, we use three SSL models that have shown excellent results on ASR tasks, namely HuBERT, Wav2Vec2.0, and WavLM. We explore both model ensembles and feature ensembles for ASR, using embeddings obtained from the pretrained models for the downstream task. We obtain improved performance over individual models and pretrained features on the LibriSpeech (100h) and WSJ datasets for the downstream task.
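A feature ensemble of this kind can be as simple as concatenating frame-level embeddings from the pretrained encoders before the downstream ASR head. A minimal sketch, assuming the standard Hugging Face model classes and a 16 kHz waveform (the public base checkpoints below are illustrative, not necessarily the ones used in the paper):

```python
# Feature-ensemble sketch: concatenate frame-level embeddings from three
# pretrained SSL encoders; a downstream ASR model consumes the fused features.
import torch
from transformers import HubertModel, Wav2Vec2Model, WavLMModel

hubert = HubertModel.from_pretrained("facebook/hubert-base-ls960")
wav2vec = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
wavlm = WavLMModel.from_pretrained("microsoft/wavlm-base")

waveform = torch.randn(1, 16000)  # one second of 16 kHz audio (dummy input)

with torch.no_grad():
    feats = [m(waveform).last_hidden_state for m in (hubert, wav2vec, wavlm)]

# Frame counts match because all three models use the same convolutional
# feature-extractor stride (20 ms); fuse along the feature dimension.
fused = torch.cat(feats, dim=-1)   # (batch, frames, 3 * hidden_size)
print(fused.shape)
```

A model ensemble, by contrast, would combine the outputs of separately trained downstream ASR systems rather than their input features.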
Self-supervised learning (SSL) has shown tremendous success in various speech-related downstream tasks, including automatic speech recognition (ASR). The output embeddings of an SSL model are treated as powerful short-term representations of the speech signal. However, in the ASR task, the main objective is to obtain the correct sequence of acoustic units, characters, or byte-pair encodings (BPEs). Usually, encoder-decoder architectures work exceptionally well for sequence-to-sequence tasks such as ASR. Therefore, in this paper, we propose a new paradigm that exploits the power of a decoder during self-supervised learning. We use the Hidden-unit BERT (HuBERT) SSL framework to compute the conventional masked prediction loss for the encoder. In addition, we introduce a decoder into the SSL framework and propose a target preparation strategy for it. Finally, we use a multitask SSL setup in which the encoder and decoder losses are jointly optimized. We hypothesize that the presence of a decoder in the SSL model helps it learn an acoustic-unit-based language model, which may improve the performance of the downstream ASR task. We compare our proposed SSL model with HuBERT and show up to a 25% relative improvement in ASR performance by fine-tuning on various LibriSpeech subsets.
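The multitask setup amounts to summing the encoder's HuBERT-style masked-prediction loss with a sequence loss from the added decoder. A minimal PyTorch sketch of the joint objective, where the module sizes, target preparation, and loss weighting are illustrative assumptions rather than the paper's exact configuration:

```python
# Multitask SSL sketch: masked-prediction loss on the encoder plus a
# cross-entropy loss from a Transformer decoder over prepared target units.
import torch.nn as nn

hidden, n_units = 768, 500   # illustrative sizes (e.g., k-means cluster count)
encoder_proj = nn.Linear(hidden, n_units)   # predicts hidden units at masked frames
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=hidden, nhead=8, batch_first=True),
    num_layers=2,
)
decoder_proj = nn.Linear(hidden, n_units)
ce = nn.CrossEntropyLoss()

def joint_loss(enc_out, mask, frame_targets, tgt_embed, tgt_units, alpha=0.5):
    # Encoder loss: predict pseudo-labels only at masked frames (HuBERT-style).
    loss_enc = ce(encoder_proj(enc_out[mask]), frame_targets[mask])
    # Decoder loss: predict the prepared target-unit sequence, attending to
    # the encoder output as memory.
    dec_out = decoder(tgt_embed, memory=enc_out)
    loss_dec = ce(decoder_proj(dec_out).flatten(0, 1), tgt_units.flatten())
    # Joint multitask objective; alpha balances the two losses (assumed value).
    return alpha * loss_enc + (1 - alpha) * loss_dec
```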
Variational inference uses optimization, rather than integration, to approximate the marginal likelihood, and thereby the posterior, in a Bayesian model. Thanks to advances in computational scalability made in the last decade, variational inference is now the preferred choice for many high-dimensional models and large datasets. This tutorial introduces variational inference from the parametric perspective that dominates these recent developments, in contrast to the mean-field perspective commonly found in other introductory texts.
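Concretely, the parametric view fixes a family $q_\phi$ and turns posterior approximation into optimization of the evidence lower bound (ELBO) over the parameters $\phi$:

```latex
% Evidence lower bound (ELBO): for any parametric family q_phi(z),
%   log p(x) = L(phi) + KL(q_phi(z) || p(z|x)) >= L(phi),
% so maximizing the ELBO over phi tightens the posterior approximation.
\log p(x) \;\ge\; \mathcal{L}(\phi)
  \;=\; \mathbb{E}_{q_\phi(z)}\!\left[\log p(x, z) - \log q_\phi(z)\right]
```

Since the gap between $\log p(x)$ and $\mathcal{L}(\phi)$ is exactly the KL divergence from $q_\phi(z)$ to the posterior $p(z \mid x)$, maximizing the ELBO simultaneously approximates the marginal likelihood and the posterior.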
Knowledge graphs (KG) have served as the key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully exploit the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects, i.e., feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
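The first reference step, mask-based dynamic weighting, can be pictured as masked average pooling of support features into a class center that then re-weights query features channel-wise. A minimal sketch of that idea, assuming precomputed feature maps and a binary support mask (the shapes and the sigmoid gating are illustrative assumptions, not the paper's exact design):

```python
# Mask-based dynamic weighting sketch: pool support features under the
# support mask into a class center, then re-weight query features with it.
import torch

def dynamic_class_center(support_feat, support_mask):
    # support_feat: (C, H, W) feature map; support_mask: (H, W) binary mask.
    mask = support_mask.unsqueeze(0)                        # (1, H, W)
    center = (support_feat * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1e-6)
    return center                                           # (C,) masked average pool

def reweight_query(query_feat, center):
    # query_feat: (C, H, W); gate each channel by the class center (illustrative).
    gate = torch.sigmoid(center).view(-1, 1, 1)             # (C, 1, 1)
    return query_feat * gate

support_feat = torch.randn(256, 32, 32)
support_mask = (torch.rand(32, 32) > 0.5).float()
query_feat = torch.randn(256, 32, 32)

center = dynamic_class_center(support_feat, support_mask)
enhanced = reweight_query(query_feat, center)
print(enhanced.shape)  # torch.Size([256, 32, 32])
```

The second, instance-level reference would then let support object queries calibrate the query-image object queries via cross-attention.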
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
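Image-level alignment of the kind described (global photometric alignment) is often realized as channel-wise moment matching: shifting a source image's per-channel mean and standard deviation toward target-domain statistics. A minimal sketch of that idea, under the assumption that first/second-moment matching stands in for the paper's module:

```python
# Global photometric alignment sketch: match per-channel mean/std of a
# source-domain image to a target-domain reference image.
import numpy as np

def photometric_align(source, target, eps=1e-6):
    # source, target: float arrays of shape (H, W, 3) with values in [0, 1].
    src_mean = source.mean(axis=(0, 1))
    src_std = source.std(axis=(0, 1)) + eps
    tgt_mean = target.mean(axis=(0, 1))
    tgt_std = target.std(axis=(0, 1))
    aligned = (source - src_mean) / src_std * tgt_std + tgt_mean
    return np.clip(aligned, 0.0, 1.0)

source = np.random.rand(512, 1024, 3)   # e.g., a GTA5 frame
target = np.random.rand(512, 1024, 3)   # e.g., a Cityscapes frame
print(photometric_align(source, target).shape)
```

Feature-level alignment then operates on the encoder's pixel features rather than on raw image statistics.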
The development of social media user stance detection and bot detection methods relies heavily on large-scale, high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original dataset in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. For MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
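Selecting the top-20 property features by information gain can be approximated with the mutual information between each feature and the label. A minimal sketch using scikit-learn, where the feature matrix and label vector are placeholder data:

```python
# Feature-selection sketch: keep the 20 account features with the highest
# mutual information (an information-gain estimate) w.r.t. the account label.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.random((1000, 50))            # placeholder: 1000 accounts, 50 property features
y = rng.integers(0, 2, size=1000)     # placeholder: binary bot/human labels

selector = SelectKBest(score_func=mutual_info_classif, k=20)
X_top = selector.fit_transform(X, y)

print(X_top.shape)                             # (1000, 20)
print(np.flatnonzero(selector.get_support()))  # indices of the retained features
```

In a graph-based detector, the selected property features and tweet features would then serve as node attributes over the seven relation types.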
The performance of inertial navigation systems is largely dependent on a stable flow of external measurements and information to guarantee continuous filter updates and bound the inertial solution drift. Platforms in different operational environments may at some point be prevented from receiving external measurements, thus exposing their navigation solution to drift. Over the years, a wide variety of works have been proposed to overcome this shortcoming by exploiting knowledge of the system's current conditions and turning it into an applicable source of information for updating the navigation filter. This paper aims to provide an extensive survey of information-aided navigation, broadly classified into direct, indirect, and model aiding. Each approach is described through the notable works that implemented its concept, use cases, relevant state updates, and the corresponding measurement models. By matching the appropriate constraint to a given scenario, one can improve the navigation solution accuracy, compensate for the lost information, and uncover certain internal states that would otherwise remain unobservable.
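A representative example of such an information update is a zero-velocity pseudo-measurement (ZUPT) applied when the platform is known to be stationary, a classic instance of model aiding. A minimal Kalman-update sketch, with an assumed state layout and illustrative noise values:

```python
# Model-aiding sketch: a zero-velocity update (ZUPT) as a Kalman filter
# measurement when the platform is detected to be stationary.
import numpy as np

n = 9                      # illustrative state: position (3), velocity (3), attitude (3)
x = np.zeros(n)            # error-state estimate
P = np.eye(n) * 0.1        # state covariance

H = np.zeros((3, n))       # measurement model: observe the velocity states
H[:, 3:6] = np.eye(3)
R = np.eye(3) * 1e-4       # pseudo-measurement noise (illustrative value)

z = np.zeros(3)            # ZUPT: velocity is "measured" as exactly zero

# Standard Kalman update: the constraint pulls the velocity error (and,
# through cross-covariances in P, correlated states) back toward zero.
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (z - H @ x)
P = (np.eye(n) - K @ H) @ P
print(np.diag(P)[3:6])     # reduced velocity uncertainty after the update
```

Other constraints from the survey's taxonomy (e.g., non-holonomic or altitude constraints) would plug into the same update with a different H, z, and R.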